Search Results: "wart"

10 August 2022

Shirish Agarwal: Mum, Samsung Galaxy M-52

Mum I dunno where to start. While I'm not supposed to announce it, mum left this earth a month ago (it had been thirteen days when I started to write this blog post). I am still in part denial, part shock, and morose. Of all the seasons in a year, the rainy season used to be my favorite; now would I ever be able to look at it and feel anything other than the emptiness that this season has given me? In some senses, it is and was very ironic: when she became ill about a year ago, I had promised myself I would be by her side for 5-6 years, not go anywhere, neither Hillhacks nor Debconf nor any meetup, and I was OK with that. Now that she's no more, I have no clue why I am living. What is the purpose, the utility? When she was alive, the utility was understandable. We had an unspoken agreement: I would look after her, and she was supposed to look after me. A part of me self-blames, as I am sure I have done thousands of things wrong; otherwise the deal was that she was going to be around for another decade. But now that she has left not even halfway through, I don't know what to do. I don't have someone to fight with anymore. It's mostly a robotic existence at the moment. I try to distract myself via movies, web series, the web, books, etc., whatever can take my mind off it. From the day she died to date, I have had a lower back pain which acts as a reminder. It's been a month; I eat, drink, and am surviving, but still feel empty. I do things suggested by extended family, but within there is no feeling, just emptiness :(. I have no clue if things will get better, or even whether I want the change. I clearly have no idea, so let me share a little about what I know.

Samsung Galaxy M-52G Just a couple of days before she died, part of our extended family had come over, and she chose that opportunity to gift me the Samsung Galaxy M-52G even though my birthday was 3 months away. Ironically, the day after I purchased it, one of the resellers of the phone cut the price from INR 28k to 20k. Had I waited a day more, I could have saved another 8k/-, but what's done is done. To my mind, the phone is a middling yet solid phone. I have dropped the phone accidentally at times, but there is not a single scratch or anything like that. One can look at the specs in greater detail on fccd.io. As I shared, before the recent price drop it was a mid-range phone, so I am going to review it on that basis. One of the first things I did was to buy a plastic cover as well as a screen shield, even though the original one is meant to work for a year or more. This was simply for added protection, and it has served me to date. Even with the additional weight, I can easily use it with one hand. It only becomes problematic when using chat apps such as Whatsapp, Telegram, Quicksy and a few others, where the Samsung keyboard comes with a divided/split layout. The A.I. for guessing words and sentences is spot-on when you are typing in English, but if you try a mixture of Hinglish (Hindi and English) it becomes a bit of a nightmare. Trying to teach the A.I. new words is something of a task. I wish there was an interface in which I could train the A.I. so it could serve Hinglish words as well. I think it does have one, but it is too rudimentary as it stands to be of any use, at least where it is now.

WiFi Direct While my previous phone did use Wi-Fi Direct, that ancient Android version wasn't wedded to Wi-Fi Direct the way this one is. You essentially have two ways to connect to any system outside. One is through Wi-Fi Direct, and the more expensive way is through mobile data. One of the strange things I found quite a number of times is that Wi-Fi would lose its pairings. Before we get into it, Wikipedia has a good explanation of what Wi-Fi Direct is all about. Apparently, either my phone or my modem loses the pairing; which of the two is the culprit, I really don't know. There are two apps from the Play Store that do help in figuring out what the issue is (although they are limited in what info they give out, but still good). The first one is Wifi Signal Meter and the other one is WifiAnalyzer (open-source). I have found that pairing done through Wifi Signal Meter works better than through Google's own implementation, which feels lacking. The whole universe of Android seems to be built on apps and games, and many of these can be bought for money, but many of them can also be played using a combination of micro-transactions and ads. For many a game, you cannot play for more than 5 minutes before you either see an ad or have to wait for something like 2-3 hrs. before you can attempt it again. Hogwarts Mystery is an example of that. Another one would be Explore Lands. While Hogwarts Mystery leans on the lore created by J.K. Rowling and you can really get into the thick of things if you know the lore, Explore Lands is more about exploration of areas. In both games, you are basically looking to gain energy over a period of time, which requires either money or viewing ads or a combination of both. Sadly, most ads, and even Google, don't seem to have caught up with the fact that I'm deaf, so most ads do not have subtitles; more often than not they are useless to me. I have also found that many games share screenshots or videos that have nothing to do with how the game actually is, so there is quite a bit of misleading going on. I did read that Android has been having issues with connecting with developers after their app is in the Play Store. Most apps ask for and require a whole lot of permissions that aren't needed by that app.

F-Droid I think Pirate Praveen had introduced me to F-Droid, and a whole lot has happened in F-Droid since: a lot more apps, games, etc., and the look of F-Droid has been pared back. In fact, I found Neo Store to be a better skin for browsing F-Droid. I have yet to explore more of F-Droid and spend some time on it before sharing any recommendations. I do find that many FOSS apps need to work on how we communicate with our users. For example, one app that Praveen had shared with me recently was Quicksy. And while it is better, it uses a double negative when asking permission about whether or not it should use more of the phone's resources. It is an example of the sort of language that we need to be aware of and do better at. I know this post is more about mobile rather than the desktop, but that is where I'm living currently.

31 December 2021

Chris Lamb: Favourite books of 2021: Fiction

In my two most recent posts, I listed the memoirs and biographies and followed this up with the non-fiction I enjoyed the most in 2021. I'll leave my roundup of 'classic' fiction until tomorrow, but today I'll be going over my favourite fiction. Books that just miss the cut here include Kingsley Amis' comic Lucky Jim, Cormac McCarthy's The Road (although see below for McCarthy's Blood Meridian) and the Complete Adventures of Tintin by Hergé, the latter forming an inadvertently incisive portrait of the first half of the 20th century. As ever, there were a handful of books that didn't live up to prior expectations. Despite all of the hype, Emily St. John Mandel's post-pandemic dystopia Station Eleven didn't match her superb The Glass Hotel (one of my favourite books of 2020). The same could be said of John le Carré's The Spy Who Came in from the Cold, which felt significantly shallower compared to Tinker, Tailor, Soldier, Spy (again, a favourite of last year). The strangest book (and the most difficult to classify at all) was undoubtedly Patrick Süskind's Perfume: The Story of a Murderer, and the book I disliked the most was almost certainly Beartown by Fredrik Backman. Two other mild disappointments were actually film adaptations. Specifically, the original source for Vertigo by Pierre Boileau and Thomas Narcejac didn't match Alfred Hitchcock's 1958 masterpiece, and neither did James Sallis' Drive, which was made into a superb 2011 neon-noir directed by Nicolas Winding Refn. These two films thus defy the usual trend and are 'better than the book', but that's a post for another day.

A Wizard of Earthsea (1971) Ursula K. Le Guin How did it come to be that Harry Potter is the publishing sensation of the century, yet Ursula K. Le Guin's Earthsea is only a popular cult novel? Indeed, the comparisons and unintentional intertextuality with Harry Potter are entirely unavoidable when reading this book, and, in almost every respect, Ursula K. Le Guin's universe comes out the victor. In particular, the wizarding world that Le Guin portrays feels a lot more generous and humble than the class-ridden world of Hogwarts School of Witchcraft and Wizardry. Just to take one example from many: in Earthsea, magic turns out to be nurtured in a bottom-up manner within small village communities, in almost complete contrast to J. K. Rowling's concept of benevolent government departments and NGO-like institutions, which now seems far too New Labour for me. Indeed, imagine an entire world imbued with the kindly benevolence of Dumbledore, and you've got some of the moral palette of Earthsea. The gently moralising tone that runs through A Wizard of Earthsea may put some people off:
Vetch had been three years at the School and soon would be made Sorcerer; he thought no more of performing the lesser arts of magic than a bird thinks of flying. Yet a greater, unlearned skill he possessed, which was the art of kindness.
Still, these parables aimed directly at the reader are fairly rare, and, for me, remain on the right side of being mawkish or hectoring. I'm thus looking forward to reading the next two books in the series soon.

Blood Meridian (1985) Cormac McCarthy Blood Meridian follows a band of American bounty hunters who are roaming the Mexican-American borderlands in the late 1840s. Far from being remotely swashbuckling, though, the group are collecting scalps for money and killing anyone who crosses their path. It is the most unsparing treatment of American genocide and moral depravity I have ever come across, an anti-Western that flouts every convention of the genre. Blood Meridian thus has a family resemblance to that other great anti-Western, Once Upon a Time in the West: after making a number of gun-toting films that venerate the American West (i.e. his Dollars Trilogy), Sergio Leone turned his cynical eye to the western. Yet my previous paragraph actually euphemises just how violent Blood Meridian is. Indeed, I would need to be a much better writer (indeed, perhaps McCarthy himself) to adequately outline the tone of this book. In a certain sense, it's less that you read this book in a conventional sense and more that you are forced to witness successive chapters of grotesque violence... all occurring for no obvious reason. It is often said that books 'subvert' a genre and, indeed, I implied as much above. But the term subvert implies a kind of Puck-like mischievousness, or brings to mind court jesters licensed to poke fun at the courtiers. By contrast, however, Blood Meridian isn't funny in the slightest. There isn't animal cruelty per se, but rather wanton negligence of another kind entirely. In fact, recalling a particular passage involving an injured horse makes me feel physically ill. McCarthy's prose is at once both baroque in its language and thrifty in its presentation. As Philip Connors wrote back in 2007, McCarthy has spent forty years writing as if he were trying to expand the Old Testament, and learning that McCarthy grew up around the Church therefore came as no real surprise. As an example of his textual frugality, I often looked for greater precision in the text, finding myself asking who a particular 'he' is, or to which side of a fight two men belonged. Yet we must always remember that there is no precision to be found in a gunfight, so this infidelity is turned into a virtue. It's not that these are fair fights anyway, or even 'murder': Blood Meridian is just slaughter; pure butchery. Murder is a gross understatement for what this book is, and at many points we are grateful that McCarthy spares us precision. At others, however, we can be thankful for his exactitude. There is no ambiguity regarding the morality of the puppy-drowning Judge, for example: a Colonel Kurtz who has been given free license over the entire American south. There is, thank God, no danger of Hollywood mythologising him into a badass hero. Indeed, we must all be thankful that it is impossible to film this ultra-violent book... Indeed, the broader idea of 'adapting' anything to this world is beyond sick. An absolutely brutal read; I cannot recommend it highly enough.

Bodies of Light (2014) Sarah Moss Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage within an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, this poignant book follows the studiously intelligent Alethia 'Ally' Moberly who is struggling to gain the acceptance of herself, her mother and the General Medical Council. You can read my full review from July.

House of Leaves (2000) Mark Z. Danielewski House of Leaves is a remarkably difficult book to explain. Although the plot refers to a fictional documentary about a family whose house is somehow larger on the inside than the outside, this quotidian horror premise doesn't explain the complex meta-commentary that Danielewski adds on top. For instance, the book contains a large number of pseudo-academic footnotes (many of which contain footnotes themselves), with references to scholarly papers, books, films and other articles. Most of these references are obviously fictional, but it's the kind of book where the joke is that some of them are not. The format, structure and typography of the book are highly unconventional too, with extremely unusual page layouts and styles. It's the sort of book and idea that should be a tired gimmick but somehow isn't. This is particularly so when you realise it seems specifically designed to create a fandom around it and to manufacture its own 'cult' status, something that should be extremely tedious. But not only does this not happen, House of Leaves seems to have survived through two exhausting decades of found footage: The Blair Witch Project and Paranormal Activity are, to an admittedly lesser degree, doing much of the same thing as House of Leaves. House of Leaves might have its origins in Nabokov's Pale Fire or even Derrida's Glas, but it seems to have more in common with the claustrophobic horror of Cube (1997). And like all of these works, House of Leaves has an extremely strange effect on the reader or viewer, something quite unlike reading a conventional book. It wasn't so much what I got out of the book itself, but how it added a glow to everything else I read, watched or saw at the time. An experience.

Milkman (2018) Anna Burns This quietly dazzling novel from Irish author Anna Burns is full of intellectual whimsy and oddball incident. Incongruously set in 1970s Belfast during the Troubles, Milkman's 18-year-old narrator (known only as 'middle sister') is the kind of dreamer who walks down the street with a Victorian-era novel in her hand. It's usually an error for a book to specifically mention other books, if only because inviting comparisons to great novels is grossly ill-advised. But it is a credit to Burns' writing that the references here actually add to the text and don't feel like a kind of literary paint-by-numbers. Our humble narrator has a boyfriend of sorts, but the figure who looms the largest in her life is a creepy milkman: an older, married man who is deeply integrated in the paramilitary tribalism. And when gossip about the narrator and the milkman surfaces, the milkman begins to invade her life to a suffocating degree. Yet this milkman is not even a milkman at all. Indeed, it's precisely this kind of oblique irony that runs through this daring but darkly compelling book.

The First Fifteen Lives of Harry August (2014) Claire North Harry August is born, lives a relatively unremarkable life and finally dies a relatively unremarkable death. Not worth writing a novel about, I suppose. But then Harry finds himself born again in the very same circumstances, and as he grows from infancy into childhood again, he starts to remember his previous lives. This loop naturally drives Harry insane at first, but after finding that suicide doesn't stop the quasi-reincarnation, he becomes somewhat acclimatised to his fate. He prospers much better at school the next time around and is ultimately able to make better decisions about his life, especially when he just happens to know how to stay out of trouble during the Second World War. Yet what caught my attention in this 'soft' sci-fi book was not necessarily the book's core idea but rather the way its connotations were so intelligently thought through. Just like in a musical theme and variations, the success of any concept-driven book is far more a product of how the implications of the key idea are played out than of how clever the central idea was to begin with. Otherwise, you just have another neat Borges short story: satisfying, to be sure, but in a narrower way. From her relatively simple premise, for example, North has divined that if there was a community of people who could remember their past lives, this would actually allow messages and knowledge to be passed backwards and forwards in time. Ah, of course! Indeed, this very mechanism drives the plot: news comes back from the future that the progress of history is being interfered with, and, because of this, the end of the world is slowly coming. Through the lives that follow, Harry sets out to find out who is passing on technology before its time, and to work out how to stop them. With its gently-moralising romp through the salient historical touchpoints of the twentieth century, I sometimes got a whiff of Forrest Gump. But it must be stressed that this book is far less certain of its 'right-on' liberal credentials than Robert Zemeckis' badly-aged film. And whilst we're on the topic of other media, if you liked the underlying conceit behind Stuart Turton's The Seven Deaths of Evelyn Hardcastle yet didn't enjoy the 'variations' of that particular tale, then I'd definitely give The First Fifteen Lives a try. At the very least, 15 is bigger than 7. More seriously, though, The First Fifteen Lives appears to reflect anxieties about technology, particularly around modern technological accelerationism. At no point does it seriously suggest that if we could somehow possess the technology from a decade in the future then our lives would be improved in any meaningful way. Indeed, precisely the opposite is invariably implied. To me, at least, homo sapiens often seems to be merely marking time until we can blow each other up, and destroying the climate whilst sleepwalking into some crisis that might precipitate a thermonuclear genocide sometimes seems to be built into our DNA. In an era of cli-fi fiction and our non-fiction newspaper headlines, to label North's insight as 'prescience' might perhaps be overstating it, but perhaps that is the point: this destructive and negative streak is universal to all periods of our violent, insecure species.

The Goldfinch (2013) Donna Tartt After Breaking Bad, the second biggest runaway success of 2014 was probably Donna Tartt's doorstop of a novel, The Goldfinch. Yet upon its release and popular reception, it got a significant number of bad reviews in the literary press with, of course, an equal number of predictable think pieces claiming this was sour grapes on the part of the cognoscenti. Ah, to be in 2014 again, when our arguments were so much more trivial. For the uninitiated, The Goldfinch is a sprawling bildungsroman that centres on Theo Decker, a 13-year-old whose world is turned upside down when a terrorist bomb goes off whilst he is visiting the Metropolitan Museum of Art, killing his mother among other bystanders. Perhaps more importantly, he makes off with a painting in order to fulfil a promise to a dying old man: Carel Fabritius' 1654 masterpiece The Goldfinch. For the next 14 years (and almost 800 pages), the painting becomes the only connection to his lost mother as he's flung, almost entirely rudderless, around the Western world, encountering an array of eccentric characters. Whatever the critics claimed, Tartt's near-perfect evocation of scenes, from the everyday to the unimaginable, is difficult to summarise. I wouldn't label it 'cinematic', due to her evocation of the interiority of the characters. Take, for example: Even the suggestion that my father had close friends conveyed a misunderstanding of his personality that I didn't know how to respond to. It's precisely this kind of relatable inner subjectivity that cannot be easily conveyed by film, and it is likely one of the main reasons why the 2019 film adaptation was such a damp squib. Tartt's writing is definitely not 'impressionistic' either: there are many near-perfect evocations of scenes, even ones we hope we cannot recognise from real life. In particular, some of the drug-taking scenes feel so credibly authentic that I sometimes worried about the author herself. Almost eight months on from first reading this novel, what I remember most was what a joy this was to read. I do worry that it won't stand up to a more critical re-reading (the character named Xandra even sounds like the pharmaceuticals she is taking), but I think I'll always treasure the first days I spent with this often-beautiful novel.

Beyond Black (2005) Hilary Mantel Published about five years before the hyperfamous Wolf Hall (2009), Hilary Mantel's Beyond Black is a deeply disturbing book about spiritualism and the nature of Hell, somewhat incongruously set in modern-day England. Alison Harte is a middle-aged psychic medium who works the various towns of the London orbital motorway. She is accompanied by her stuffy assistant, Colette, and her spirit guide, Morris, who is invisible to everyone but Alison. However, this is no gentle and musk-smelling world of the clairvoyant and mystic, for Alison is plagued by spirits from her past who infiltrate her physical world, becoming stronger and nastier every day. Alison's smiling and rotund persona thus conceals a truly desperate woman: she knows beyond doubt the terrors of the next life, yet must studiously conceal them from her credulous clients. Beyond Black would be worth reading for its dark atmosphere alone, but it offers much more than a chilling and creepy tale. Indeed, it is extraordinarily observant as well as unsettlingly funny about a particular tranche of British middle-class life. Still, it is the book's unnerving nature that sticks in the mind, and reading it noticeably changed my mood for days afterwards, and not necessarily for the best.

The Wall (2019) John Lanchester The Wall tells the story of a young man called Kavanagh, one of the thousands of Defenders standing guard around a solid fortress that envelops the British Isles. A national service of sorts, it is Kavanagh's job to stop the so-called Others getting in. Lanchester is frank about what his wall provides to those who stand guard: the Defenders of the Wall are conscripted for two years on the Wall, with no exceptions, giving everyone in society a life plan and a story. But whilst The Wall is ostensibly about a physical wall, it works even better as a story about the walls in our mind. In fact, the book blends together some of the most important issues of our time: climate change, increasing isolation, Brexit and other widening societal divisions. If you liked P. D. James' The Children of Men you'll undoubtedly recognise much of the same intellectual atmosphere, although the sterility of John Lanchester's dystopia is definitely figurative and textual rather than literal. Despite the final chapters perhaps not living up to the world-building of the opening, The Wall features a taut and engrossing narrative, and it rewards even the most cursory glance at its symbolism. I've yet to read something by Lanchester I haven't enjoyed (even his short essay on cheating in sports, for example) and will definitely be reading more from him in 2022.

The Only Story (2018) Julian Barnes The Only Story is the story of Paul, a 19-year-old boy who falls in love with 42-year-old Susan, a married woman with two daughters who are about Paul's age. The book begins with how Paul meets Susan in happy (albeit complicated) circumstances, but as the story unfolds, the novel becomes significantly more tragic and moving. Whilst the story begins from the first-person perspective, midway through the book it shifts into the second person, and, later, into the third as well. Both of these narrative changes suggested to me an attempt on the part of Paul the narrator (if not Barnes himself) to distance himself emotionally from the events taking place. This effect is a lot more subtle than it sounds, however: far more prominent and devastating is the underlying and deeply moving story of how the relationship ends up. Throughout this touching book, Barnes uses his mastery of language and observation to avoid the saccharine and the maudlin, and ends up with a heart-wrenching and emotive narrative. Without a doubt, this is the saddest book I read this year.

29 November 2021

Russ Allbery: Fall haul

It's been a while since I've posted one of these, and I also may have had a few moments of deciding to support authors by buying their books even if I'm not going to get a chance to read them soon. There's also a bit of work reading in here.
Ryka Aoki Light from Uncommon Stars (sff)
Frederick R. Chromey To Measure the Sky (non-fiction)
Neil Gaiman, et al. Sandman: Overture (graphic novel)
Alix E. Harrow A Spindle Splintered (sff)
Jordan Ifueko Raybearer (sff)
Jordan Ifueko Redemptor (sff)
T. Kingfisher Paladin's Hope (sff)
TJ Klune Under the Whispering Door (sff)
Kiese Laymon How to Slowly Kill Yourself and Others in America (non-fiction)
Yuna Lee Fox You (romance)
Tim Mak Misfire (non-fiction)
Naomi Novik The Last Graduate (sff)
Shelley Parker-Chan She Who Became the Sun (sff)
Gareth L. Powell Embers of War (sff)
Justin Richer & Antonio Sanso OAuth 2 in Action (non-fiction)
Dean Spade Mutual Aid (non-fiction)
Lana Swartz New Money (non-fiction)
Adam Tooze Shutdown (non-fiction)
Bill Watterson The Essential Calvin and Hobbes (strip collection)
Bill Willingham, et al. Fables: Storybook Love (graphic novel)
David Wong Real-World Cryptography (non-fiction)
Neon Yang The Black Tides of Heaven (sff)
Neon Yang The Red Threads of Fortune (sff)
Neon Yang The Descent of Monsters (sff)
Neon Yang The Ascent to Godhood (sff)
Xiran Jay Zhao Iron Widow (sff)

21 September 2021

Russell Coker: Links September 2021

Matthew Garrett wrote an interesting and insightful blog post about the license of software developed or co-developed by machine-learning systems [1]. One of his main points is that people in the FOSS community should aim for less copyright protection.
The USENIX ATC '21/OSDI '21 Joint Keynote Address titled It's Time for Operating Systems to Rediscover Hardware has some insightful points to make [2]. Timothy Roscoe makes some incendiary points but backs them up with evidence. Is Linux really an OS? I recommend that everyone who's interested in OS design watch this lecture.
Cory Doctorow wrote an interesting set of 6 articles about Disneyland, ride pricing, and crowd control [3]. He proposes some interesting ideas for reforming Disneyland.
Benjamin Bratton wrote an insightful article about how philosophy failed in the pandemic [4]. He focuses on the Italian philosopher Giorgio Agamben, who has a history of writing stupid articles that match Qanon talking points but with better language skills.
Arstechnica has an interesting article about penetration testers extracting an encryption key from the bus used by the TPM on a laptop [5]. It's not a likely attack in the real world as most networks can be broken more easily by other methods, but it's still interesting to learn about how the technology works.
The Portalist has an article about David Brin's Startide Rising series of novels and his thoughts on the concept of Uplift (which he denies inventing) [6].
Jacobin has an insightful article titled You're Not Lazy But Your Boss Wants You to Think You Are [7]. Making people identify as lazy is bad for them and bad for getting them to do work, but this is the first time I've seen it described as a facet of abusive capitalism.
Jacobin has an insightful article about free public transport [8]. Apparently there are already many regions that have free public transport (Tallinn, the capital of Estonia, being one example). Fare-free public transport allows bus drivers to concentrate on driving, not taking fares, removes the need for ticket inspectors, and generally provides a better service. It allows passengers to board buses and trams faster, thus reducing traffic congestion, encourages more people to use public transport instead of driving, and reduces road maintenance costs.
Interesting research from Israel about bypassing facial ID [9]. Apparently they can make a set of 9 images that can pass for over 40% of the population. I didn't expect facial recognition to be an effective form of authentication, but I didn't expect it to be that bad.
Edward Snowden wrote an insightful blog post about types of conspiracies [10].
Kevin Rudd wrote an informative article about Sky News in Australia [11]. We need to have a Royal Commission now, before we have our own 6th Jan event.
Steve from Big Mess o' Wires wrote an informative blog post about USB-C and 4K 60Hz video [12]. Basically, you can't have a single USB-C hub do 4K 60Hz video and be a USB 3.x hub unless you have compression software running on your PC (slow and only works on Windows), or have DisplayPort 1.4 or Thunderbolt (both not well supported). None of the options are well documented on online store pages, so lots of people will get unpleasant surprises when their deliveries arrive. Computers suck.
Steinar H. Gunderson wrote an informative blog post about GaN technology for smaller power supplies [13]. A 65W USB-C PSU that fits the usual wall wart form factor is an interesting development.

11 August 2021

Bits from Debian: Debian User Forums changes and updates.

DebianUserForums Several issues were brought before the Debian Community team regarding responsiveness, tone, and needed software updates to forums.debian.net. The question was asked: who's in charge? Over the course of the discussion, several Debian Developers volunteered to help by providing a presence on the forums from Debian and to assist with the necessary changes to keep the service up and running. We are happy to announce the following changes to the (NEW!) forums.debian.net, which have addressed and should address most of the prior concerns with accountability, tone, use, and reliability: Debian Developers Paulo Henrique de Lima Santana (phls), Felix Lechner (lechner), and Donald Norwood (donald) have been added to the forum's Server and Administration teams. The server instance is now running directly within Debian's infrastructure. The forum software and back-end have been updated to the most recent versions where applicable. DNS resolves for both IPv4 and IPv6. SSL/HTTPS are enabled. (It's 2021!) New Captcha and anti-spam systems are in place to thwart spammers and bots, and to make it easier for humans to register. New administrators and moderation staff were added to provide additional coverage across the hours and to combine years of experience with forum operation and Debian usage. New viewing styles are available for users to choose from, some of which are ideal for mobile/tablet viewing. We inadvertently fixed the time issue that the prior forum had of running 11 minutes fast. :) We have clarified staff roles and staff visibility. Responsiveness to users on the forums has increased. Email addresses for mods/admins have been updated and checked for validity, and they have seen direct use and response. The guidelines for forum use by users and staff have been updated. The Debian COC has been made into a Global Announcement as an accompaniment to the newly updated guidelines, to give the moderators/administrators an additional rule-set for unruly or unbecoming behavior. Some of the discussion areas have been renamed and refocused, along with the movement of multiple threads, to make indexing and searching of the forums easier. Many (new!) features and extensions have been added to the forum for ease of use and modernization, such as a user thanks system and thread hover previews. Some server administrative tasks were upgraded as well which don't belong on a public list, but we are backing up regularly and are secure. :) We have a few minor details here and there to attend to, and the work is ongoing. Many thanks and appreciation to the Debian System Administrators (DSA) and Ganneff, who took the time to coordinate and assist with the instance, DNS, and network and server administration minutiae; to our helpful DPL Jonathan Carter; to the current and prior forum moderators and administrators Mez, sunrat, 4D696B65, arochester, and cds60601 for helping with the modifications and transition; and to the forum users who participated in lots of the tweaking. All in all this was a large community task, and everyone did a significant part. Thank you!

18 July 2021

Shirish Agarwal: BBI Kenyan Supreme Court, U.P. Population Bill, South Africa, 'Suli Deals', IT Rules 2021, Sedition Law and Danish Siddiqui's death.

BBI Kenya and live Supreme Court streaming on YT The last few weeks have been unrelenting as all sorts of news have been coming in, mostly about the downturn in the economy, Islamophobia in India on the rise, Covid, and electioneering. However, in the last few days, Kenya surpassed India in live-streaming proceedings in a Court of Appeal case about BBI, or the Building Bridges Initiative. A background filler article on the topic can be found on the BBC. The live-streaming was done via YT, and if one wants to, one can start from

https://www.youtube.com/watch?v=JIQzpmVKvro One can also subscribe to K24TV, which took the initiative of sharing the proceedings with people worldwide. If K24TV continues to share SC proceedings of Kenya, that would add to the soft power of Kenya. I will not go into the details of the case, as Gautam Bhatia, who has been following the goings-on in Kenya, is a far better authority on the subject. In fact, just recently he shared another Kenyan judgment from a trial, which can be seen here. He has shared the proceedings and some hot takes in the Twitter thread started by him. Probably after a couple of weeks or more, when he has processed all that has happened there, he may also share some nuances, although many of his thoughts would probably go into his book on Comparative Constitutional Law, which he hopes to publish maybe in 2021/2022 or whenever he can. Such televised proceedings are sure to elevate the standing of Kenya internationally. There has been a proposal to do similar broadcasts in India, but with surveillance built in, so they know who is watching. The problems with the architecture and the built-in surveillance have been shared by Srinivas Kodali or DigitalDutta quite a few times, but that probably is a story for another day.

Uttar Pradesh Population Control Bill
Hindus comprise 83% of Indian couples with more than two children
The U.P. Population Bill came, and it came with a lot of prejudices. One of the prejudices is the idea that Muslims procreate to have the most children. Even then, the data presented, as shared above, is from the NFHS (National Family Health Survey), which is supposed to carry out surveys every few years and did the last one around 4 years back. The analysis from it has been instrumental not only in preparing graphs like the one above but also in sharing what sort of death toll there must have been in rural India. And as somebody who has had the opportunity to be there in the past, I can vouch that you need to be extremely lucky if something happens to you when you are in a rural area. Even in places like Bodh Gaya (I have been there), where millions of tourists come as it is one of the places not to be missed on the Buddhism tourist circuit, the medical facilities are pretty underwhelming. I am not citing it simply because there are too many such newspaper reports from even before the pandemic, and both the State and the Central Govt. response has been dismal. Just a few months back, they were recalled. There were reports of votes being bought at INR 1000/- (around $14) and a bottle or two of liquor. There used to be a time when election monitoring, whether national or state, used to be a thing, and you had LTOs (Long-Term Observers) and STOs (Short-Term Observers) to make sure that the election was neutral. This has been on the decline in this regime, but that probably is for another time altogether. Although, I have to point to the article which I had shared a few months ago on how the private healthcare model is flawed, especially for rural areas. Instead, we should go for cheap telemedicine centers that run some version of a Linux distro and can provide a variety of services. I know Kerala and Tamil Nadu in South India have experimented with this in the past, but such engagements need to be scaled up. I will probably come to know more the next time I visit those places (sadly, due to the virus, not anytime soon :( ). Going back to the original topic, though, I had shared Hans Rosling's famous TED talk on population growth, which shows that birth rates even in countries which we would not normally associate with family planning, e.g. the Middle East and Africa, have also been falling quite rapidly. Of course, when people have deeply held prejudices, then it is difficult. Even when sharing how China had to let go of its old policy in 2016 because of the problem of 'leftover men'. I also shared the powerful movie So Long, My Son. I even shared how in Haryana women were and are trafficked, and how this has been an issue for centuries, but as neither suits the RW propaganda, they simply refuse to engage. They are more repulsed by people who publish this news than by those who are actually practicing it, as that is 'culture'. There is also teenage pregnancy, female infanticide, sex-selective abortion, etc., etc. It is just all too horrible to contemplate. Personal anecdote: I know a couple, or they used to be a couple, where the gentleman wanted to have a male child. It was only after they had an autistic child that they got their DNA tested and came to know that the gentleman had a genetic problem. He again forced the issue and had another child, and that one too turned out to be autistic. Finally, he left the wife and the children, divorced them, and lived with another woman. Almost a decade of the wife's life was ruined. The wife, before marriage, was a gifted programmer employed at IBM. This was an arranged marriage.
After this, if you are thinking of marrying, apart from doing astrology charts, also look up DNA compatibility charts. Far better than ruining your or the woman's life. Both the children, whom I loved, are now in heaven, god bless them. If one wants to, one can read a bit more about the Uttar Pradesh Population Bill here. The sad part is that nobody wants to fix the systems which need fixing. The reason is simple: if you get good health service from the public sector, who will go to the private sector? In Europe, AFAIK, they have the best medical bang for the money. Even the U.S. looks at Europe and wishes it had the systems that Europe has, but that again is probably for another day.

South Africa and India, long-lost brothers. As I had shared before, after the 2016 South African Debconf convention, I had been following South Africa. I was happy when FeesMustFall worked and the then ANC president Zuma declared it in late 2017. I am sure that people who have been regular visitors to this blog know what my position is on student loans. They must also know that even the U.S., till the 1970s, had free education all the way to becoming a lawyer and getting a law license. It is only when people like Thurgood Marshall, Martin Luther King Jr., and others from the civil rights movement came out as a major force that the capitalists started imposing fees. They wanted people who could be sold into corporate slavery, and they won. Just last week, Biden took some steps and canceled student loans and is working on steps towards broad debt forgiveness. Interestingly, NASA has an affirmative diversity program for people from diverse backgrounds, under which a couple of UC (Upper Caste) women got the job. While they got the job, the RW (Right-Wing) was overjoyed, claiming they got the jobs on 'merit'. Later, it was found that both the women were third- or fourth-generation immigrants in the U.S.
NASA Federal Equal Opportunity Policy Directive NPD 3713 2H
Going back to the original question and topic: there has been a concerning spate of violence, with some calling it the worst sort of violence witnessed since 1994. The problem, as ascertained in that article, is the same as here in India or elsewhere. Those, again, who have been on my blog know that 'merit' 90% of the time is a function of privilege, and there is a vast amount of academic literature which supports that. If, for a moment, you look at the data shared in the graph above, which shows that 83% of Hindus and 13% of Muslims have more than 2 children, what does it show? It shows that 83+13 = 96% of the population is living in insecurity. The remaining few percent are the ones who have actually consolidated more power during this regime's rule in India. Similarly, from what I understood living in Cape Town for about a month, it is the Dutch Afrikaners, as they like to call themselves, and the immigrants who come from abroad who have enjoyed the fruits of tourism and money and power, while the rest of the country is dying due to poverty. It is the same there, it is the same here. Corruption is also rampant in both countries, and the judiciary is virtually absent from both communities in India and SA. Interestingly, South Africa and India have been at loggerheads, but I suspect that is more due to the money and lobbying power of the Dutch. Usually, those who have money power do get laws and even the press on their side, and it is usually the ruling party in power. I cannot help but share about the Gupta brothers and their corruption, as I came to know about it in 2016. And as I have shared, I'm related to Guptas on my mother's side, not those specific ones, but Gupta as a clan. The history of the Gupta dynasty does go back to the 3rd-4th century. Equally interesting has been Sonali Ranade's series of articles which she wrote in the National Herald, the latest on exports, which is actually the key to taking India out of poverty rather than anything else. While in other countries exporters are given all sorts of subsidies, here the effort goes into how to give them less. This was in the Economic Times hardly a week back
Export incentive schemes being reduced
I can't imagine the incredible stupidity of the Finance Minister. And then, in an attempt to prove otherwise, they will present a rosy picture with numbers that have nothing to do with reality. Interestingly enough, India at one time was a major exporter of apples, especially from Kashmir. Now, instead of exporting, we are importing them from Afghanistan as well as Belgium, and now even from the UK. Those who might not want to use the Twitter link could use this article. Of course, what India got out of this trade deal is not known. One can see that the UK got the better deal from this. Instead of investing in our own capacity expansion, we are investing in increasing the capacity of others. This is at a time when, due to the fuel price hike (Central taxes 66%), demand is completely flat. And this is when our own CEA (Chief Economic Adviser) tells us that growth will be at most 6-7%, and that too in 2023-2024, while currently the inflation rate is around 12%. Is it then any wonder that almost 70% are living on Govt. rations, and people in the streets of Kolkata, Assam, and other places have to sell kidneys to make sure they have some money for their kids for tomorrow? Now, I have nothing against the UK, but trade negotiation is an art. Sadly, this has been going on for the last few years. The politicians in India fool the public by always telling of future trade deals. Sadly, as any businessman knows, once you have compromised, you always have to compromise. And the more you compromise, the more you weaken your hand for any future trade deals.
IIT pupil tries to sell kidney to repay loan, but no takers for Dalit organ.
The above was from yesterday's Times of India. It just goes to show how much people are suffering. There have been reports in vernacular papers of quite a few people from across regions and communities doing this so they can live without pain a bit. Almost all the time, the politicians are saved, as only a few understand international trade, diplomacy, and the surrounding geopolitics. And this, sadly, has as much to do with basic education as with any other factor.

Suli Deals About a month back, on the holy day of Ramzan, or Ramadan as it is known in the West, which is beloved by Muslims, a couple of Muslim women were targeted and virtually auctioned. Soon there was a flood, and a GitHub repository was created where hundreds of Muslim women, especially those who have a voice and fearlessly talk about their understanding of issues and things, were being virtually auctioned. Although an FIR was put up a week later, to date none of the people mentioned in the FIR have been arrested. In fact, just yesterday, there was an open letter which was published by LiveLaw. I have saved a copy on WordPress, just in case something does go wrong. Other than the disgust we feel, I can't say much, as no action is being taken by the GOI and police.

IT Rules 2021 and Big Media After almost a year of sleeping while most activists were screaming hoarsely about how the new IT Rules are dangerous for one and all, big media finally woke up a few weeks back and filed a writ petition in the Madras High Court about the same. Although, to be frank, the real writ petition was filed in February 2021 by the classical singer and performer T.M. Krishna in the Madras High Court. Again, I have hosted a copy of the writ petition on WordPress. On 23rd June 2021, a group of 13 media outlets and a journalist challenged the IT Rules, 2021. The contention came from the Digital News Publishers Association, which is made up of the following news companies: ABP Network Private Limited, Amar Ujala Limited, DB Corp Limited, Express Network Pvt Ltd, HT Digital Streams Limited, IE Online Media Services Pvt Ltd, Jagran Prakashan Limited, Lokmat Media Private Limited, NDTV Convergence Limited, TV Today Network Limited, The Malayala Manorama Co (P) Ltd, Times Internet Limited, and Ushodaya Enterprises Private Limited. All the above are heavyweights in the markets where they operate. The reason is simple: when these media organizations came into being, the idea was to have self-regulation, which by and large has worked. Now, the present Govt. wants each news item to be okayed by them before publication. This is nothing but blatant misuse of power and an attempt at censorship. In fact, the Tamil Nadu BJP president himself made a promise of the same. And of course, what is true and what is a lie, only the GOI knows and will decide for the rest of the country. If somebody remembers Joseph Goebbels at this stage, it is merely a coincidence. Anyways, three days ago, on 14th July, the Honorable Supreme Court asked the Madras High Court to transfer all the petitions to the SC. This the Madras High Court denied, as cited/shared by Meera Emmanuel, a reporter who works with barandbench. The Court said nothing doing, let this play out first, and then the SC can entertain the motion at that level; at the same time, it would have the benefit of the Madras High Court's opinion as well. It gave the Center two weeks to file a reply. So, either by the end of July or at the latest by the first week of August, we might be able to read the Center's reply on the same. The SC could make a forceful intervention, but it would lead to similar outrage as has been witnessed in the past, when a judge commented that if the SC has to do it all, then why do we need the High Courts, district courts, etc.; let all the solutions come from the SC itself. This was, admittedly, frustration on the part of the judge, but due in part to the needless intervention of the SC time and time again. But the concerns have been felt around all the different courts in the country.

Sedition Law A couple of days ago, the Supreme Court, under the guidance of Honorable CJI NV Ramanna, entertained the PIL filed by Maj Gen S G Vombatkere (Retd.), which asked simply why we still need the sedition law that was used in colonial times by the British to quell dissent by Mahatma Gandhi and Bal Gangadhar Tilak during the Indian freedom struggle. A good background filler article can be found on MSN, which tells about some recent cases, but more importantly how the sedition law was historically used to quell dissent during India's independence struggle. Another article on MSN actually elaborates on the PIL filed by Maj Gen S. G. Vombatkere. Yet another article on MSN tells how the sedition law has been challenged and changed in 10-odd countries. I find it equally sad and equally hilarious that news and opinion on this topic, which the Indian media is supposed to share, is instead being shared more by MSN. Although, I would be remiss in my duty if I did not share the editorials on the same topic by the Hindu and the Deccan Chronicle. Also, an interesting question to ask is: are there only 10 countries in the world that have sedition laws? AFAIK, there are roughly 200-odd countries as recognized by the WTO. If 190-odd countries do not have sedition laws, it also tells a lot about them and a lot about the remaining 10. Also, it came to light that police are still filing cases under Sec 66A, which was declared null and void a few years ago. It was replaced with Section 124A, if memory serves right, which has more checks and balances.

Danish Siddiqui, Pulitzer award winner, and his death in Afghanistan Before I start with Danish Siddiqui, let me share an anecdote that I think I have shared on the blog years ago about how photojournalists are. Again, those who know me and those who follow me know how mad I am about both trains and planes (civil aviation). A few months back, I had shared a blog post about some of the biggest railway systems in the world, which shows that privatization of railways doesn't necessarily lead to upgradation of services but definitely leads to an increase in tariffs/fares. I just had a conversation a couple of days ago on Twitter and realized that I need to also put up a blog post about civil aviation in India and the problems it faces, but I digress. This was about a gentleman who wanted to take a photo of a particular train coming out of a valley at a certain tunnel, at two different heights, one from below and one from above the train. This was several years ago, and while I did share that award-winning photograph then, it probably would take me quite a bit of time and effort to look it up on my blog again and share it. The logistics, though, were far more interesting and intricate than I had first thought. We came around a couple of days before the train was supposed to pass that tunnel and the valley. More than half a dozen or maybe more shots were taken throughout the day by the cameras. The idea was to see how much light was being captured by the cameras and how much exposure was to be given so that the picture isn't whitened out or too black. Weather is the strangest of foes for a photojournalist or even photographers, and the more you are in nature, the more unpredictable it is and can be. We were also at a certain height, so care had to be taken in case light rainfall happened or dew fell, both not good for digital cameras. And dew is something which will happen regardless of what you want. So while for two days our gentleman cameraman fiddled with the settings to figure out the correct exposure settings, we had one other gentleman who was supposed to take the train from an earlier station and apprise us if the train was late or not. The most ideal time would be at 0600 hrs, when the train would enter the tunnel and come out, and the mixture of early morning sun rays, dew, the flowers in the valley, and the train would give a beautiful effect. We could stretch it to maybe 0700 hrs. Anything after that would just be useless, as it wouldn't have the same effect. And all of this depended on nature. If the skies were to remain too dark, nothing we could do about it; if the dewdrops didn't fall, it would all be over. On the day of the shoot, we were told by our compatriot that the train was late by half an hour. We sank a little on hearing that news. Although Photoshop and others can do touch-ups, most professionals like to take as authentic a snap as possible. Everything had been set up to perfection. The wide-angle lenses on both the cameras, with protections, were set up. The tension you could cut with a knife. While we had a light breakfast, I took a bit more and went into the woods to shit and basically not be there. This was too tense for me. I returned an hour later to find everybody in a good mood. Apparently, the shoot went well. One of the two cameras captured it well enough. Now, this is and was in a benign environment where the only foe was the environment. A bad shot would have meant another week in the valley, something which I was not looking forward to.
Those who have lived with photographers and photojournalists know how self-involved they can be in their craft, and how grumpy they can be if they had a bad shoot. For those who don't know, it is challenging to be friends with such people for a long time. I wish they would scream more at nature and let out the frustrations they have after a bad shoot. But again, this was in a very safe environment. Now let's cut to Danish Siddiqui and the kind of photojournalism he followed. He followed a much riskier sort of photojournalism than the one described above. Krittivas Mukherjee in his Twitter thread shared how reporters in most advanced countries are trained in multiple areas, from risk assessment to how to behave in case you are kidnapped, or are in riots, hostage situations, etc. They are also trained in all sorts of medical skills, from treating gunshot wounds to CPR and other survival methods. They are supposed to carry medical equipment along with their photography equipment. Sadly, these concepts are unknown in India. And even then they get killed. Sadly, he attributes his death to the thrill of taking an exclusive photograph, and the gentleman's bio reads that he is a diplomat. Talk about tone-deafness. On a completely different level was Karen Hao, who was full of empathy as she shared the humility, grace, warmth and kinship she describes in her interaction with the photojournalist. His body of work can be seen via his TED talk in 2020, where he shared a brief collage of his works. Latest, though, in a turnaround, the Taliban have claimed no involvement in the death of photojournalist Danish Siddiqui. This could be in part to show the Taliban in a more favorable light, as they do and would want to be showcased as progressive, even though they are forcing all women within a certain age to become concubines or marry the fighters, and are killing the minority Hazaras or doing vile deeds to them. Meanwhile, statements made by Hillary Clinton some 12 years ago have come back into circulation, which stated how the U.S. itself created the Taliban to thwart the Soviet Union and, once that job was finished, forgot all about it. And then in 2001, it landed back in Afghanistan, while the real terrorists were Saudi. To date, not all documents of 9/11 are in the public domain. One can find more information on the same here. It is probably going to take another few years before Saudi Arabia's whole role in the September 11 attacks is known. Last but not least, I came to know about the Pegasus spyware and how many prominent people in some nations were targeted, including in mine, India. I will not talk more as it's already a big blog post, and the Pegasus revelations need an article of their own.

17 July 2021

Dirk Eddelbuettel: ttdo 0.0.8: Micro-tweak

A new (and genuinely) minor release of our ttdo package arrived on CRAN today. The ttdo package extends the most excellent (and very minimal / zero-depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs (as shown in the screenshot below), which seemingly is so compelling an idea that it eventually got copied by another package. [ttdo screenshot] This release cleans up one microscopic wart of an R warning when installing and byte-compiling the package, due to a sprintf call with an unused argument. And once again, this release gets a #ThankYouCRAN mark as it was processed in a fully automated and intervention-free manner in a matter of minutes. As usual, the NEWS entry follows.

Changes in ttdo version 0.0.8 (2021-07-17)
  • Expand sprintf template to suppress R warning

CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 October 2020

Ulrike Uhlig: Better handling emergencies

We all know these situations when we receive an email asking Can you check the design of X, I need a reply by tonight. Or an instant message: My website went down, can you check? Another email: I canceled a plan at the hosting company, can you restore my website as fast as possible? A phone call: The TLS certificate didn t get updated, and now we can t access service Y. Yet another email: Our super important medical advice website is suddenly being censored in country Z, can you help? Everyone knows those messages that have URGENT in capital letters in the email subject. It might be that some of them really are urgent. Others are the written signs of someone having a hard time properly planning their own work and passing their delays on to someone who comes later in the creation or production chain. And others again come from people who are overworked and try to delegate some of their tasks to a friendly soul who is likely to help.

How emergencies create more emergencies In the past, my first reflex when I received an urgent request was to start rushing into solutions. This happened partly out of empathy, partly because I like to be challenged into solving problems, and I m fairly good at that. This has proven to be unsustainable, and here is why.

Emergencies create unplanned work The first issue is that emergencies create a lot of unplanned work. Which in turn means not getting other, scheduled, things done. This can create a backlog, end up in working late, or working on weekends.

Emergencies can create a permanent state of exception Unplanned work can also create a lot of frustration, out of the feeling of not getting the things done that one planned to do. We might even get a feeling of being nonautonomous (in German I would say fremdbestimmt, which roughly translates to being directed by others). In the long term, this can generate unsustainable situations: higher workloads, and burnout. When working in a team of several people, A might have to take over the work of B because B does not have enough capacity. Then A gets overloaded in turn, and C and D have to take over A's work. Suddenly the team is stuck in a permanent state of exception. This state of exception will produce more backlog. The team might start to deprioritize social issues in favour of getting technical things done. They might not be able to recruit new people anymore because they have no capacity left to onboard newcomers.

One emergency can result in a variety of emergencies for many people The second issue produced by urgent requests is that if I cannot solve the initial emergency by myself, I might try to involve colleagues, other people who are skilled in the area, or people who work in another relevant organization to help with this. Suddenly, the initial emergency has become my emergency as well as the emergency of a whole bunch of other people.

A sidenote about working with friends This might be less of an issue in a classical work setup than in a situation in which a bunch of freelancers work together, or in setups in which work and friendships are intertwined. This is a problem, because the boundaries between friend and worker role, and the expectations that go along with these roles, can get easily confused. If a colleague asks me to help with task X, I might say no; if a friend asks, I might be less likely to say no.

What I learnt about handling emergencies I came up with some guidelines that help me to better handle emergencies.

Plan for unplanned work It doesn't matter and it doesn't help to distinguish whether urgent requests are legitimate or whether they come from people who have not done their homework on time. What matters is to make one's weekly todo list sustainable. After reading Making Work Visible by Dominica DeGrandis, I understood the need to add free slots for unplanned work into one's weekly schedule. Slots for unplanned work can take up to 25% of the total work time!

Take time to make plans Now that there are some free slots to handle emergencies, one can take some time to think when an urgent request comes in. A German saying proposes to wait and have some tea ( abwarten und Tee trinken ). I think this is actually really good advice, and works for any non-obvious problem. Sit down and let the situation sink in. Have a tea, take a shower, go for a walk. It s never that urgent. Really, never. If possible, one can talk about the issue with another person, rubberduck style. Then one can make a plan on how to address the emergency properly, it could be that the solution is easier than at first thought.

Affirming boundaries: Saying no Is the emergency that I m asked to solve really my problem? Or is someone trying to involve me because they know I m likely to help? Take a deep breath and think about it. No? It s not my job, not my role? I have no time for this right now? I don t want to do it? Maybe I m not even paid for it? A colleague is pushing my boundaries to get some task on their own todo list done? Then I might want to say no. I can t help with this. or I can help you in two weeks. I don t need to give a reason. No. is a sentence. And: Saying no doesn t make me an arse.

Affirming boundaries: Clearly defining one's role Clearly defining one's role is something that is often overlooked. In many discussions I have with friends it appears that this is a major cause of overwork and underpayment. Lots of people are skilled, intelligent, and curious, and easily get challenged into putting on their superhero dress. But they're certainly not the only person that can help, even if an urgent request makes them think that at first. To clearly define our role, we need to make clear which part of the job is our work, and which part needs to be done by other people. We should stop trying to accommodate people and their requests to the detriment of our own sanity. You're a language interpreter and are being asked to mediate a bilingual conflict between the people you are interpreting for? It's not your job. You're the graphic designer for a poster, but the text you've been given is not good enough? Send back a recommendation to change the text; don't do these changes yourself: it's not your job. But you can and want to do this yourself and it would make your client's life easier? Then ask to be paid for the extra time, and make sure to renegotiate your deadline!

Affirming boundaries: Defining expectations Along with our role, we need to define expectations: in which timeframe am I willing to do the job? Under which contract, which agreement, which conditions? For which payment? People who work in a salaried office job generally do have a work contract in which their role and the expectations that come with this role are clearly defined. Nevertheless, I hear from friends that their superiors regularly try to make them do tasks that are not part of their role definition. So, here too, roles and expectations sometimes need to be renegotiated, and the boundaries of these roles need to be clearly affirmed.

Random conclusive thoughts If you ve read until here, you might have experienced similar things. Or, on the contrary, maybe you re already good at communicating your boundaries and people around you have learnt to respect them? Congratulations. In any case, for improving one s own approach to such requests, it can be useful to find out which inner dynamics are at play when we interact with other people. Additionally, it can be useful to understand the differences between Asker and Guesser culture:
when an Asker meets a Guesser, unpleasantness results. An Asker won t think it s rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor or just an Asker, who s assuming you might decline. If you re a Guesser, you ll hear it as an expectation.
Askers should also be aware that there might be Guessers in their team. It can help to define clear guidelines about making requests (when do I expect an answer, under which budget/contract/responsibility does the request fall, what other task can be put aside to handle the urgent task?). Last, but not least, Making Work Visible has a lot of other proposals on how to make unplanned work visible and then deal with it.

15 July 2020

Jonathan Dowland: Lockdown music

Last Christmas, to make room for a tree, I disassembled my hi-fi unit and (temporarily, I thought) lugged my hi-fi and records up to the study.
Power, Corruption & Lies
I had been thinking about expanding the amount of storage I had for hifi and vinyl, perhaps moving from 2x2 storage cubes to 2x3, although Ikea don't make a 2x3 version of their stalwart vinyl storage line, the Kallax. I had begun exploring other options, both at Ikea and other places. Meanwhile, I re-purposed my old Expedit unit as storage for my daughter's Sylvanian Families. Under Lockdown, I've spent a lot more time in my study, and so I've set up the hifi there. It turns out I have a lot more opportunity to enjoy the records up here, during work, and I've begun to explore some things which I haven't listened to in a long time, or possibly ever. I thought I'd start keeping track of some of them. Power, Corruption and Lies is not something rarely listened to. It's steadily become my favourite New Order album. When I came across this copy (Factory, 1983), it was in pristine condition, but it now bears witness to my (mostly careful) use. There's now a scratch somewhere towards the end of the first track Age of Consent which causes my turntable to loop. By some good fortune the looping point is perfectly aligned to a bar. I don't always notice it straight away. This record rarely makes it back down from the turntable to where it's supposed to live.

6 July 2020

Reproducible Builds: Reproducible Builds in June 2020

Welcome to the June 2020 report from the Reproducible Builds project. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds? One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.

News The GitHub Security Lab published a long article on the discovery of a piece of malware designed to backdoor open source projects that used the build process and its resulting artifacts to spread itself. In the course of their analysis and investigation, the GitHub team uncovered 26 open source projects that were backdoored by this malware and were actively serving malicious code. (Full article) Carl Dong from Chaincode Labs uploaded a presentation on Bitcoin Build System Security and reproducible builds to YouTube: The app intended to trace infection chains of Covid-19 in Switzerland published information on how to perform a reproducible build. The Reproducible Builds project has received funding in the past from the Open Technology Fund (OTF) to reach specific technical goals, as well as to enable the project to meet in-person at our summits. The OTF has actually also assisted countless other organisations that promote transparent, civil society as well as those that provide tools to circumvent censorship and repressive surveillance. However, the OTF has now been threatened with closure. (More info) It was noticed that Reproducible Builds was mentioned in the book End-user Computer Security by Mark Fernandes (published by WikiBooks) in the section titled Detection of malware in software. Lastly, reproducible builds and other ideas around software supply chain were mentioned in a recent episode of the Ubuntu Podcast in a wider discussion about the Snap and application stores (at approx 16:00).

Distribution work In the ArchLinux distribution, a goal to remove .doctrees from installed files was created via Arch s TODO list mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default. Dimitry Andric was able to determine why the reproducibility status of FreeBSD s base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler [ ]. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [ ]. For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan janneke Nieuwenhuizen posted that they had further reduced their bootstrap seed to 25% which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors. In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:

Debian Holger Levsen filed three bugs (#961857, #961858 & #961859) against the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [ ][ ][ ]. Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of hundreds of packages that use the CMake build system, which led to a number of tests and next steps [ ]. Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system's approach to this [ ]. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds. 47 reviews of Debian packages were added, 37 were updated and 69 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [ ] and updated the paths_vary_due_to_usrmerge as deterministic issue, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues [ ][ ][ ]. Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable, and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).

Development On our website this month, Arnout Engelen added a link to our Mastodon account [ ] and moved the SOURCE_DATE_EPOCH git log example to another section [ ]. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [ ]. strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [ ] and adjusted a related dependency to avoid potential circular dependency [ ].
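For a sense of how strip-nondeterminism is applied, a minimal sketch (the file path is hypothetical; in Debian package builds this normally happens automatically via debhelper, as noted above) is to point it at an individual build artifact so that known sources of nondeterminism such as embedded timestamps are normalised:

  $ strip-nondeterminism build/libexample.jar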

Upstream work The Reproducible Builds project attempts to fix unreproducible packages and we try to send all of our patches upstream. This month, we wrote a large number of such patches, including: Bernhard M. Wiedemann also filed reports for frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).

diffoscope diffoscope is our in-depth diff-on-steroids utility which helps us diagnose reproducibility issues in packages (a brief usage sketch follows after the changelog below). It does not define reproducibility, but rather provides helpful, human-readable guidance for packages that are not reproducible, instead of relying on essentially-useless binary diffs. This month, Chris Lamb uploaded versions 147, 148 and 149 to Debian and made the following changes:
  • New features:
    • Add output from strings(1) to ELF binaries. (#148)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:
    • Prevent a traceback when comparing PDF documents that did not contain metadata (ie. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. [ ]
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). [ ]
  • Output improvements:
    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an info level warning. (#29)
  • Logging improvements:
  • Testsuite improvements:
    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. [ ]
    • Don t mask an existing test. [ ]
  • Codebase improvements:
    • Replace obscure references to WF with Wagner-Fischer for clarity. [ ]
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of missing files. [ ]
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. [ ]
    • Drop a large number of unused imports. [ ][ ][ ][ ][ ]
    • Make many code sections more Pythonic. [ ][ ][ ][ ]
    • Prevent some variable aliasing issues. [ ][ ][ ]
    • Use some tactical f-strings to tidy up code [ ][ ] and remove explicit u"unicode" strings [ ].
    • Refactor a large number of routines for clarity. [ ][ ][ ][ ]
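As a rough illustration of the kind of guidance diffoscope produces, a minimal invocation (the file names here are hypothetical) compares two builds of the same package and writes an HTML report using the --html-dir presenter mentioned below:

  $ diffoscope --html-dir report/ example_1.0-1_amd64.deb example_1.0-1.rebuild_amd64.deb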
trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb also corrected the location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called; their not running had caused an accidental resource exhaustion. (#12) In addition, Jean-Romain Garnier made the following changes:
  • Fix the --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
  • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
  • Make child pages open in new window in the --html-dir presenter format. [ ]
  • Improve the diffs in the --html-dir format. [ ][ ]
Lastly, Daniel Fullmer fixed the Coreboot filesystem comparator [ ] and Mattia Rizzolo prevented warnings from the tlsh fuzzy-matching library during tests [ ] and tweaked the build system to remove an unwanted .build directory [ ]. For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [ ] and later 148 [ ].

Testing framework We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:
  • Debian-related changes:
    • Prevent bogus failure emails from rsync2buildinfos.debian.net every night. [ ]
    • Merge a fix from David Bremner s database of .buildinfo files to include a fix regarding comparing source vs. binary package versions. [ ]
    • Only run the Debian package rebuilder job twice per day. [ ]
    • Increase bullseye scheduling. [ ]
  • System health status page:
    • Add a note displaying whether a node needs to be rebooted for a kernel upgrade. [ ]
    • Fix sorting order of failed jobs. [ ]
    • Expand footer to link to the related Jenkins job. [ ]
    • Add archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to known broken jobs. [ ]
    • Add HTML <meta> header to refresh the page every 5 minutes. [ ]
    • Count the number of ignored jobs [ ], ignore permanently known broken jobs [ ] and jobs on known offline nodes [ ].
    • Only consider the known offline status from Git. [ ]
    • Various output improvements. [ ][ ]
  • Tools:
    • Switch URLs for the Grml Live Linux and PureOS package sets. [ ][ ]
    • Don t try to build a disorderfs Debian source package. [ ][ ][ ]
    • Stop building diffoscope as we are moving this to Salsa. [ ][ ]
    • Merge several is diffoscope up-to-date on every platform? test jobs into one [ ] and fail less noisily if the version in Debian cannot be determined [ ].
In addition: Marcus Hoffmann was added as a maintainer of the F-Droid reproducible checking components [ ], Jelle van der Waa updated the is diffoscope up-to-date in every platform check for Arch Linux and diffoscope [ ], Mattia Rizzolo backed up a copy of a remove script run on the Codethink-hosted jump server [ ] and Vagrant Cascadian temporarily disabled the fixfilepath on bullseye, to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue. Lastly, the usual build node maintenance was performed by Holger Levsen [ ][ ], Mattia Rizzolo [ ] and Vagrant Cascadian [ ][ ][ ][ ][ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

This month s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

10 June 2020

Jonathan Dowland: Template Haskell and Stream-processing programs

I've written about what Template Haskell is and given an example of what it can be used for; now it's time to explain why I was looking at it in the context of my PhD work. Encoding stream-processing programs StrIoT is an experimental distributed stream-processing system that I and others are building in order to explore our research questions. A user of StrIoT writes a stream-processing program, using a set of 8 functional operators provided for the purpose. A simple example is
streamFn :: Stream Int -> Stream Int
streamFn = streamFilter (<15)
         . streamFilter (>5)
         . streamMap (*2)
Our system is distributed: we take a stream-processing program and partition it into sub-programs, which are distributed to and run on separate nodes (perhaps cloud instances, or embedded devices like Raspberry Pis etc.). In order to do that, we need to be able to manipulate the stream-processing program as data. We've initially opted for a graph data-structure, with the vertices in the graph defined as
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [String]
    , intype     :: String
    , outtype    :: String
    } deriving (Eq,Show)
A stream-processing program encoded this way, equivalent to the first example, looks like this:
path [ StreamVertex 0 Map    ["(*2)"]  "Int" "Int"
     , StreamVertex 1 Filter ["(>5)"]  "Int" "Int"
     , StreamVertex 2 Filter ["(<15)"] "Int" "Int"
     ]
We can easily manipulate instances of such types, rewrite them, partition them and generate code from them. Unfortunately, this is quite a departure from the first simple code example from the perspective of a user writing their program. Template Haskell gives us the ability to manipulate code as a data structure, and also to inspect names to gather information about them (their type, etc.). I started looking at TH to see if we could build something where the user-supplied program was as close to that first case as possible. TH limitations There are two reasons that we can't easily manipulate a stream-processing definition written as in the first example. The following expressions are equivalent, in some sense, but are not equal, and so yield completely different expression trees when quasi-quoted:
[| streamFilter (<15) . streamFilter (>5) . streamMap (*2) |]
[| \s -> streamFilter (<15) (streamFilter (>5) (streamMap (*2) s)) |]
[| streamMap (*2) >>> streamFilter (>5) >>> streamFilter (<15) |]
[| \s -> s & streamMap (*2) & streamFilter (>5) & streamFilter (<15) |]
[| streamFn |] -- a named expression, defined outside the quasi-quotes
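To make the equivalent-but-not-equal point concrete, here is a small, self-contained sketch. It uses plain Prelude functions instead of the StrIoT operators (which are not needed to show the effect) and checks that two equivalent pipelines quote to different expression trees:

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

main :: IO ()
main = do
  -- quote the point-free pipeline and the explicit-lambda version
  e1 <- runQ [| filter (<15) . filter (>5) . map (*2) |]
  e2 <- runQ [| \s -> filter (<15) (filter (>5) (map (*2) s)) |]
  print (e1 == e2)  -- False: same behaviour, different abstract syntax trees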
In theory, reify can give you the definition of a function from its name, but in practice it doesn't, because this was never implemented. So at the very least we would need to insist that a user included the entirety of a stream-processing program within quasi-quotes, and not split it up into separate bits, with some bits defined outside the quotes and referenced within (as in the last case above). We would probably also have to insist on a consistent approach for composing operators together, such as always using (.) and never >>>, &, etc., which is limiting. Incremental approach After a while ruminating on this, and before moving onto something else, I thought I'd try approaching it from the other side. Could I introduce some TH into the existing approach, and improve it? The first thing I've tried is to change the parameters field to TH's ExpQ, meaning the Map example above would become
StreamVertex 0 Map [ [| (*2) |] ] "Int" "Int"
I worked this through. It's an incremental improvement in ease and clarity for the user writing a stream-processing program. It catches a class of programming bugs that would otherwise slip through: the expressions in the brackets have to be syntactically valid (although they aren't type checked). Some of the StrIoT internals are also much improved, particularly the logical operator. Here's an excerpt from a rewrite rule that involves composing code embedded in strings, dealing with all the escaping rules and hoping we've accounted for all possible incoming expression encodings:
let f' = "(let f = ("++f++"); p = ("++p++"); g = ("++g++") in\
         \ \\ (a,b) v -> (f a v, if p v a then g b v else b))"
    a' = "("++a++","++b++")"
    q' = "(let p = ("++p++"); q = ("++q++") in \\v (y,z) -> p v y && q v z)"
And the same section after, manipulating ExpQ types:
let f' = [| \ (a,b) v -> ($(f) a v, if $(p) v a then $(g) b v else b) |]
    a' = [| ($(a), $(b)) |]
    q' = [| \v (y,z) -> $(p) v y && $(q) v z |]
I think the code-generation part of StrIoT could be radically refactored to take advantage of this change but I have not made huge inroads into that. Next steps This is, probably, where I am going to stop. This work is very interesting to me but not the main thrust of my research. But incrementally improving the representation gave me some ideas of what I could try next: for example, folding the operator and its parameters into a single quoted expression, so that the type would collapse down to
data StreamVertex = StreamVertex
    { vertexId    :: Int
    , opAndParams :: ExpQ
    } deriving (Eq,Show)
Example instances might be
StreamVertex 0 [| streamMap (*2) |]
StreamVertex 1 [| streamExpand |]
StreamVertex 2 [| streamScan (\c _ -> c+1) 0 |]
The vertexId field is a bit of a wart, but we require that due to the graph data structure that we are using. A change there could eliminate it, too. By this point we are not that far away from where we started, and certainly much closer to the "pure" function application in the very first example.

8 May 2020

Petter Reinholdtsen: Jami as a Zoom client, a trick for password protected rooms...

Half a year ago, I wrote about the Jami communication client, capable of peer-to-peer encrypted communication. It handles messages, audio and video. It uses distributed hash tables instead of central infrastructure to connect its users to each other, which in my book is a plus. I mentioned briefly that it could also work as a SIP client, which came in handy when the higher education sector in Norway started to promote Zoom as its video conferencing solution. I am reluctant to use the official Zoom client software, due to their copyright license clauses prohibiting users from reverse engineering (for example to check the security) and benchmarking it, and thus prefer to connect to Zoom meetings with free software clients. Jami worked OK as a SIP client to Zoom as long as there was no password set on the room. The Jami daemon leaks memory like crazy (approximately 1 GiB a minute) when I am connected to the video conference, so I had to restart the client every 7-10 minutes, which is not great. I tried to get other Linux SIP clients to work without success, so I decided I would have to live with this wart until someone managed to fix the leak in the dring code base. But another problem showed up once the rooms were password protected. I could not get my dial tone signaling through from Jami to Zoom, and dial tone signaling is used to enter the password when connecting to Zoom. I tried a lot of different permutations with my Jami and Asterisk setup to try to figure out why the signaling did not get through, only to finally discover that the fundamental problem seems to be that Zoom is simply not able to receive dial tone signaling when connecting via SIP. There seems to be nothing wrong with the Jami and Asterisk end; it is simply broken on the Zoom end. I got help from a very skilled VoIP engineer figuring out this last part. And being a very skilled engineer, he was also able to locate a solution for me. Or to be exact, a workaround that solves my initial problem of connecting to password protected Zoom rooms using Jami. So, how do you do this? I am sure you are wondering by now. The trick is already documented by Zoom, and it is to modify the SIP address to include the room password. What is most surprising about this is that the automatically generated email from Zoom with instructions on how to connect via SIP does not mention this. The SIP address to use normally consists of the room ID (a number), an @ character and the IP address of the Zoom SIP gateway. But Zoom understands a lot more than just the room ID in front of the at sign. The format is "[Meeting ID].[Password].[Layout].[Host Key]", and you can see here how you can both enter a password, control the layout (full screen, active presence and gallery) and specify the host key to start the meeting. The full SIP address entered into Jami to provide the password will then look like this (all using made up numbers):
sip:657837644.522827@192.168.169.170
Now if only Jami would reduce its memory usage, I could even recommend this setup to others. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

30 March 2020

Louis-Philippe Véronneau: Using Zoom's web client on Linux

TL;DR: The Zoom meeting link you have probably looks like this:
https://zoom.us/j/123456789
To use the web client, use this instead:
https://zoom.us/wc/join/123456789
Foreword Like too many institutions, the school where I teach chose to partner up with Zoom. I wasn't expecting anything else, as my school's IT department is a Windows shop. Well, I guess I'm still a little disappointed. Although I had vaguely heard of Zoom before, I had never thought I'd be forced to use it. Lucky for me, my employer decided not to force us to use it. To finish the semester, I plan to record myself and talk with my students on a Jitsi Meet instance. I will still have to attend meetings on Zoom though. I'm well aware of Zoom's bad privacy record and I will not install their desktop application. Zoom does offer a web client. Sadly, on Linux you need to jump through hoops to be able to use it. Using Zoom's web client on Linux Zoom's web client apparently works better on Chrome, so I decided to use Chromium. If you don't already have the desktop client installed on your machine, the standard procedure to use the web client would be:
  1. Open the link to the meeting in Chromium
  2. Click on the "download & run Zoom" link showed on the page
  3. Click on the "join from your browser" link that then shows up
Sadly, that's not what happens on Linux. When you click on the "download & run Zoom" link, it brings you to a page with instructions on how to install the desktop client on Linux. You can thwart that stupid behavior by changing your browser's user agent to make it look like you are using Windows. This is the UA string I've been using:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
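One way to apply such a user agent, as a sketch (a browser extension or the developer tools would work too, and the binary may be called chromium or chromium-browser depending on your distribution), is Chromium's --user-agent command-line switch:

chromium --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36" https://zoom.us/j/123456789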
With that, when you click on the "download & run Zoom" link, it will try to download a .exe file. Cancel the download and you should now see the infamous "join from your browser" link. Upon closer inspection, it seems you can get to the web client by changing the meeting's URL. The Zoom meeting link you have probably looks like this:
https://zoom.us/j/123456789
To use the web client, use this instead:
https://zoom.us/wc/join/123456789
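If you attend such meetings regularly, the rewrite is mechanical enough to script; a minimal sketch using sed:

$ echo 'https://zoom.us/j/123456789' | sed 's|/j/|/wc/join/|'
https://zoom.us/wc/join/123456789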
Jitsi Meet Puppet Module I've been playing around with Jitsi Meet quite a bit recently and I've written a Puppet module to install and configure an instance! The module certainly isn't perfect, but should yield a working Jitsi instance. If you already have a Puppet setup, please give it a go! I'm looking forward to receiving feedback (and patches) to improve it.

15 March 2020

Andreas Metzler: balance sheet snowboarding season 2019/20

Looking at the date I am posting this it will not come as a surprise that the season was not a strong one. For pre-opening I again booked a carving fresh-up workshop with Pure Boarding in Pitztal (November 21 to November 24). Since the resort extends to more than 3400m of altitude the weather can be harsh. This year we had strong southern winds and moved to Riffelsee for Saturday, which turned out to be a really nice resort. Natural snow turned out to be a rare commodity this year, but nevertheless I only rode at Diedamskopf, where they rely on natural snow almost exclusively. They again managed to prepare quality slopes with natural snow. First day on snow was 15 December, followed by a long pause until December 28. Having loads of unused vacation days I did three- or four-day work weeks in January and the start of February whenever work (thanks, colleagues!), weather and conditions were nice. In hindsight this saved the season. Since resorts have closed yesterday or will close today due to COVID-19 I am missing out on riding in Warth/Salober in spring. While the number of days is low, the quality of riding was still high. Health-wise, jumper's knee has kept me from finally really improving on my backside turn. Well, here is the balance sheet, excluding pre-opening in Pitztal:
2019/20 2018/19 2017/18 2016/17 2015/16 2014/15 2013/14 2012/13 2011/12 2010/11 2009/10 2008/09 2007/08 2006/07 2005/06
number of (partial) days 21 33 29 30 17 24 30 23 25 30 30 37 29 17 25
Damüls 0 2 8 4 4 9 29 4 10 23 16 10 5 10 10
Diedamskopf 21 26 19 23 12 13 1 19 14 4 13 23 24 4 15
Warth/Schröcken 0 5 2 3 1 2 0 0 1 3 1 4 0 3 0
total meters of altitude 160797 296308 266158 269819 138037 224909 274706 203562 228588 203918 202089 226774 219936 74096 124634
highscore 9375 10850 11116 12245 11015 13278 12848m 13885m 13076m 10976m 11888m 11272m 12108m 8321m 10247m
# of runs 374 701 616 634 354 530 597 468 516 449 462 551 503 189 309

17 November 2017

Jonathan Carter: I am now a Debian Developer

It finally happened On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you're paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (the current draft is around 20 times longer than this entire post). So I decided I'd rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it. How it started In 1999... no wait, I can't start there, as much as I want to; this is a short post, so... In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its boot-floppies installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn t want to support Windows ever again, and I didn t ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just don t get it and I need some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called freshmeat that listed new releases of upstream software and then, I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again. Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up highvoltage@debian.org . I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :) Ubuntu and beyond

Ubuntu 4.10 default desktop Image from distrowatch

One day, Thomas told me that Mark is planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just warty written on it and said that I should install it on a server so that we can try it out. It was great, it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system is going to be called Ubuntu and the desktop edition has naked people on it. I wasn t sure what he meant and was kind of dumbfounded so I just laughed and said something like Uh ok . At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline Linux for Human Beings . Fun fact, one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter and it was eventually handled by legal. Closer to Ubuntu s first release, Mark brought over a whole bunch of Debian developers that was working on Ubuntu over to the foundation and they were around for a few days getting some sun. Thomas kept saying Go talk to them! Go talk to them! , but I felt so intimidated by them that I couldn t even bring myself to walk up and say hello. In the interest of keeping this short, I m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I d like to do a similar video series that might help a new generation of packagers. I ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I d end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It's been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I'll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

12 November 2017

Ben Armstrong: The Joy of Cat Intelligence

As a cat owner, being surprised by cat intelligence delights me. They re not exactly smart like a human, but they are smart in cattish ways. The more I watch them and try to sort out what they re thinking, the more it pleases me to discover they can solve problems and adapt in recognizably intelligent ways, sometimes unique to each individual cat. Each time that happens, it evokes in me affectionate wonder. Today, I had one of those joyful moments. First, you need to understand that some months ago, I thought I had my male cat all figured out with respect to mealtimes. I had been cleaning up after my oafish boy who made a watery mess on the floor from his mother s bowl each morning. I was slightly annoyed, but was mostly curious, and had a hunch. A quick search of the web confirmed it: my cat was left-handed. Not only that, but I learned this is typical for males, whereas females tend to be right-handed. Right away, I knew what I had to do: I adjusted the position of their water bowls relative to their food, swapping them from right to left; the messy morning feedings ceased. I congratulated myself for my cleverness. You see, after the swap, as he hooked the kibbles with his left paw out of the right-hand bowl, they would land immediately on the floor where he could give them chase. The swap caused the messes to cease because before, his left-handed scoops would land the kibbles in the water to the right; he would then have to scoop the kibble out onto the floor, sprinkling water everywhere! Furthermore, the sodden kibble tended to not skitter so much, decreasing his fun. Or so I thought. Clearly, I reasoned, having sated himself on the entire contents of his own bowl, he turned to pilfering his mother s leftovers for some exciting kittenish play. I had evidence to back it up, too: he and his mother both seem to enjoy this game, a regular fixture of their mealtime routines. She, too, is adept at hooking out the kibbles, though mysteriously, without making a mess in her water, whichever way the bowls are oriented. I chalked this up to his general clumsiness of movement vs. her daintiness and precision, something I had observed many times before. Come to think of it, lately, I ve been seeing more mess around his mother s bowl again. Hmm. I don t know why I didn t stop to consider why And then my cat surprised me again. This morning, with Shadow behind my back as I sat at my computer, finishing up his morning meal at his mother s bowl, I thought I heard something odd. Or rather, I didn t hear something. The familiar skitter-skitter sound of kibbles evading capture was missing. So I turned and looked. My dear, devious boy had squished his overgrown body behind his mother s bowls, nudging them ever so slightly askew to fit the small space. Now the bowl orientation was swapped back again. Stunned, I watched him carefully flip out a kibble with his left paw. Plop! Into the water on the right. Concentrating, he fished for it. A miss! He casually licked the water from his paw. Another try. Swoop! Plop, onto the floor. No chase now, just satisfied munching of his somewhat mushy kibble. And then it dawned on me that I had got it somewhat wrong. Yes, he enjoyed Chase the Kibble, like his mom, but I never recognized he had been indulging in a favourite pastime, peculiarly his own I had judged his mealtime messes as accidents, a very human way of thinking about my problem. Little did I know, it was deliberate! His private game was Bobbing for Kibbles. 
I don t know if it s the altered texture, or dabbling in the bowl, but whatever the reason, due to my meddling, he had been deprived of this pleasure. No worries, a thwarted cat will find a way. And that is the joy of cat intelligence.

2 October 2017

Antoine Beaupré: Strategies for offline PGP key storage

While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines. That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.

Why store keys offline? Before we go into details about storing keys offline, it may be useful to do a small reminder of how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A) which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
    uid                 [ultimate] Antoine Beaupré <anarcat@anarc.at>
    uid                 [ultimate] Antoine Beaupré <anarcat@koumbit.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that key can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key, without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline. So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online but if they get stolen, the certification key can revoke those keys without having to revoke the main certification key as well. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even if that subkey gets revoked, so the benefits of revoking encryption certificates are more limited.
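As a reminder of that last step, a revocation certificate can be generated ahead of time and kept somewhere safe and offline; a minimal sketch, using the fingerprint shown above:

    gpg --output revocation-certificate.asc --gen-revoke 8DC901CE64146C048AD50FBB792152527B75921E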

Common strategies for offline key storage Considering the security tradeoffs, some propose storing those critical keys offline to reduce those threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard. Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret key operations. The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore. Another option is to set up a separate air-gapped system to perform certification operations. An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory. I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks. System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.
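For the LUKS option, a rough sketch of creating such a volume with a longer key-derivation delay, as suggested above (the device name is a placeholder), could look like this:

    cryptsetup luksFormat --iter-time 5000 /dev/sdX
    cryptsetup open /dev/sdX offline-keys
    mkfs.ext4 /dev/mapper/offline-keys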

Keycard advantages The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on the behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key decryption without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys. The OpenPGP standard specifies there are three subkeys available by default: for signature, authentication, and encryption. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad. We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer. Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the key is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time. In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, which will lock itself after a limited number of tries. It also provides for a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex. Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.

Keycard tradeoffs As Gerwitz hinted, there are multiple downsides to using a keycard, however. Another DD, Wouter Verhelst clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificates and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically. Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume. Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.

Quick keycard "howto" Getting keys onto a keycard is easy enough:
  1. Start with a temporary key to test the procedure:
        export GNUPGHOME=$(mktemp -d)
        gpg --generate-key
    
  2. Edit the key using its user ID (UID):
        gpg --edit-key UID
    
  3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
        gpg> key 1
        gpg> keytocard
    
  4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard, but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?" . This way the keycard is a backup of your secret key.
  5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset $GNUPGHOME)
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply do gpg --card-edit with the key plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (which stays on the keycard). This is the same procedure as the one to use the secret key on another computer.

Conclusion There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises and we examined the main ways of doing so: either with an air-gapped system, LUKS-encrypted keyring, or by using keycards. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble. And of course, those approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the world. It is definitely not for the faint of heart, however. Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
This article first appeared in the Linux Weekly News.

Antoine Beaupré: Strategies for offline PGP key storage

While the adoption of OpenPGP by the general population is marginal at best, it is a critical component for the security community and particularly for Linux distributions. For example, every package uploaded into Debian is verified by the central repository using the maintainer's OpenPGP keys and the repository itself is, in turn, signed using a separate key. If upstream packages also use such signatures, this creates a complete trust path from the original upstream developer to users. Beyond that, pull requests for the Linux kernel are verified using signatures as well. Therefore, the stakes are high: a compromise of the release key, or even of a single maintainer's key, could enable devastating attacks against many machines. That has led the Debian community to develop a good grasp of best practices for cryptographic signatures (which are typically handled using GNU Privacy Guard, also known as GnuPG or GPG). For example, weak (less than 2048 bits) and vulnerable PGPv3 keys were removed from the keyring in 2015, and there is a strong culture of cross-signing keys between Debian members at in-person meetings. Yet even Debian developers (DDs) do not seem to have established practices on how to actually store critical private key material, as we can see in this discussion on the debian-project mailing list. That email boiled down to a simple request: can I have a "key dongles for dummies" tutorial? Key dongles, or keycards as we'll call them here, are small devices that allow users to store keys on an offline device and provide one possible solution for protecting private key material. In this article, I hope to use my experience in this domain to clarify the issue of how to store those precious private keys that, if compromised, could enable arbitrary code execution on millions of machines all over the world.

Why store keys offline? Before we go into details about storing keys offline, it may be useful to briefly review how the OpenPGP standard works. OpenPGP keys are made of a main public/private key pair, the certification key, used to sign user identifiers and subkeys. My public key, shown below, has the usual main certification/signature key (marked SC) but also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A), which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
    pub   rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
    uid                 [ultimate] Antoine Beaupré <anarcat@anarc.at>
    uid                 [ultimate] Antoine Beaupré <anarcat@koumbit.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
    uid                 [ultimate] Antoine Beaupré <anarcat@debian.org>
    sub   rsa2048/B7F648FED2DF2587 2012-07-18 [A]
    sub   rsa2048/604E4B3EEE02855A 2012-07-20 [A]
    sub   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
    sub   rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub) and identities (uid) are bound by the main certification key using cryptographic self-signatures. So while an attacker stealing a private subkey can spoof signatures in my name or authenticate to other servers, that key can always be revoked by the main certification key. But if the certification key gets stolen, all bets are off: the attacker can create or revoke identities or subkeys as they wish. In a catastrophic scenario, an attacker could even steal the key and remove your copies, taking complete control of the key, without any possibility of recovery. Incidentally, this is why it is so important to generate a revocation certificate and store it offline. So by moving the certification key offline, we reduce the attack surface on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or signature) can stay online, but if they get stolen, the certification key can revoke those keys without having to revoke the main certification key as well. Note that a stolen encryption key is a different problem: even if we revoke the encryption subkey, this will only affect future encrypted messages. Previous messages will be readable by the attacker with the stolen subkey even if that subkey gets revoked, so the benefits of revoking encryption subkeys are more limited.
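As a concrete example, a revocation certificate for the key shown above can be generated ahead of time and stored somewhere safe (recent GnuPG versions also create one automatically under openpgp-revocs.d when a key is generated):

        gpg --gen-revoke 8DC901CE64146C048AD50FBB792152527B75921E > revocation.asc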

Common strategies for offline key storage Considering the security tradeoffs, some propose storing those critical keys offline to reduce those threats. But where exactly? In an attempt to answer that question, Jonathan McDowell, a member of the Debian keyring maintenance team, said that there are three options: use an external LUKS-encrypted volume, an air-gapped system, or a keycard. Full-disk encryption like LUKS adds an extra layer of security by hiding the content of the key from an attacker. Even though private keyrings are usually protected by a passphrase, they are easily identifiable as a keyring. But when a volume is fully encrypted, it's not immediately obvious to an attacker there is private key material on the device. According to Sean Whitton, another advantage of LUKS over plain GnuPG keyring encryption is that you can pass the --iter-time argument when creating a LUKS partition to increase key-derivation delay, which makes brute-forcing much harder. Indeed, GnuPG 2.x doesn't have a run-time option to configure the key-derivation algorithm, although a patch was introduced recently to make the delay configurable at compile time in gpg-agent, which is now responsible for all secret key operations. The downside of external volumes is complexity: GnuPG makes it difficult to extract secrets out of its keyring, which makes the first setup tricky and error-prone. This is easier in the 2.x series thanks to the new storage system and the associated keygrip files, but it still requires arcane knowledge of GPG internals. It is also inconvenient to use secret keys stored outside your main keyring when you actually do need to use them, as GPG doesn't know where to find those keys anymore. Another option is to set up a separate air-gapped system to perform certification operations. An example is the PGP clean room project, which is a live system based on Debian and designed by DD Daniel Pocock to operate an OpenPGP and X.509 certificate authority using commodity hardware. The basic principle is to store the secrets on a different machine that is never connected to the network and, therefore, not exposed to attacks, at least in theory. I have personally discarded that approach because I feel air-gapped systems provide a false sense of security: data eventually does need to come in and out of the system, somehow, even if only to propagate signatures out of the system, which exposes the system to attacks. System updates are similarly problematic: to keep the system secure, timely security updates need to be deployed to the air-gapped system. A common use pattern is to share data through USB keys, which introduce a vulnerability where attacks like BadUSB can infect the air-gapped system. From there, there is a multitude of exotic ways of exfiltrating the data using LEDs, infrared cameras, or the good old TEMPEST attack. I therefore concluded the complexity tradeoffs of an air-gapped system are not worth it. Furthermore, the workflow for air-gapped systems is complex: even though PGP clean room went a long way, it's still lacking even simple scripts that allow signing or transferring keys, which is a problem shared by the external LUKS storage approach.
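To illustrate the point about key-derivation delay, this is roughly what creating such a volume might look like; this is only a sketch, /dev/sdX is a placeholder for the removable device, and the delay is given in milliseconds:

        # increase the key-derivation delay to about 5 seconds to make
        # brute-forcing the LUKS passphrase slower; /dev/sdX is a placeholder
        # for the removable device that will hold the offline keyring
        cryptsetup luksFormat --iter-time 5000 /dev/sdX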

Keycard advantages The approach I have chosen is to use a cryptographic keycard: an external device, usually connected through the USB port, that stores the private key material and performs critical cryptographic operations on behalf of the host. For example, the FST-01 keycard can perform RSA and ECC public-key decryption without ever exposing the private key material to the host. In effect, a keycard is a miniature computer that performs restricted computations for another host. Keycards usually support multiple "slots" to store subkeys. The OpenPGP card specification provides three slots by default: one each for signature, authentication, and encryption keys. Finally, keycards can have an actual physical keypad to enter passwords so a potential keylogger cannot capture them, although the keycards I have access to do not feature such a keypad. We could easily draw a parallel between keycards and an air-gapped system; in effect, a keycard is a miniaturized air-gapped computer and suffers from similar problems. An attacker can intercept data on the host system and attack the device in the same way, if not more easily, because a keycard is actually "online" (i.e. clearly not air-gapped) when connected. The advantage over a fully-fledged air-gapped computer, however, is that the keycard implements only a restricted set of operations. So it is easier to create an open hardware and software design that is audited and verified, which is much harder to accomplish for a general-purpose computer. Like air-gapped systems, keycards address the scenario where an attacker wants to get the private key material. While an attacker could fool the keycard into signing or decrypting some data, this is possible only while the keycard is physically connected, and the keycard software will prompt the user for a password before doing the operation, though the keycard can cache the password for some time. In effect, it thwarts offline attacks: to brute-force the key's password, the attacker needs to be on the target system and try to guess the keycard's password, and the card will lock itself after a limited number of tries. It also provides a clean and standard interface to store keys offline: a single GnuPG command moves private key material to a keycard (the keytocard command in the --edit-key interface), whereas moving private key material to a LUKS-encrypted device or air-gapped computer is more complex. Keycards are also useful if you operate on multiple computers. A common problem when using GnuPG on multiple machines is how to safely copy and synchronize private key material among different devices, which introduces new security problems. Indeed, a "good rule of thumb in a forensics lab", according to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum personal data possible on your systems". Keycards provide the best of both worlds here: you can use your private key on multiple computers without actually storing it in multiple places. In fact, Mike Gerwitz went as far as saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.

Keycard tradeoffs As Gerwitz hinted, there are, however, multiple downsides to using a keycard. Another DD, Wouter Verhelst, clearly expressed the tradeoffs:
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.
"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificates and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically. Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume. Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle. A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything. Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.

Quick keycard "howto" Getting keys onto a keycard is easy enough:
  1. Start with a temporary key to test the procedure:
        export GNUPGHOME=$(mktemp -d)
        gpg --generate-key
    
  2. Edit the key using its user ID (UID):
        gpg --edit-key UID
    
  3. Use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
        gpg> key 1
        gpg> keytocard
    
  4. If you want to move the subkey, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise, use the quit command, which leaves the copy on the keycard but also keeps the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?". This way the keycard acts as a backup of your secret key (see the example session after this list).
  5. Once you are satisfied with the results, repeat steps 1 through 4 with your normal keyring (unset GNUPGHOME first).
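For reference, the keytocard step for a signing subkey might look something like the following; the exact prompts vary with the GnuPG version and the type of subkey selected:

        gpg> key 1
        gpg> keytocard
        Please select where to store the key:
           (1) Signature key
           (3) Authentication key
        Your selection? 1
        gpg> quit
        Save changes? (y/N) n
        Quit without saving? (y/N) y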
When a key is moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec keyword. If the key is completely missing (for example, if you moved it to a LUKS container), the # sign is used instead. If you need to use a key from a keycard backup, you simply run gpg --card-edit with the keycard plugged in, then type the fetch command at the prompt to fetch the public key that corresponds to the private key on the keycard (the private key itself stays on the keycard). This is the same procedure you would use to access the secret key from another computer.
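For illustration, the listing after moving a signature subkey to the card might look roughly like this (key IDs and dates are placeholders, and the exact format depends on the GnuPG version and keyid-format settings):

        $ gpg --list-secret-keys
        sec>  rsa4096/0x0000000000000001 2017-01-01 [SC]
        ssb>  rsa2048/0x0000000000000002 2017-01-01 [S]
        ssb   rsa4096/0x0000000000000003 2017-01-01 [E]

        $ gpg --card-edit
        gpg/card> fetch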

Conclusion There are already informal OpenPGP best-practices guides out there and some recommend storing keys offline, but they rarely explain what exactly that means. Storing your primary secret key offline is important in dealing with possible compromises and we examined the main ways of doing so: an air-gapped system, a LUKS-encrypted keyring, or a keycard. Each approach has its own tradeoffs, but I recommend getting familiar with keycards if you use multiple computers and want a standardized interface with minimal configuration trouble. And of course, those approaches can be combined. This tutorial, for example, uses a keycard on an air-gapped computer, which neatly resolves the question of how to transmit signatures between the air-gapped system and the outside world. It is definitely not for the faint of heart, however. Once one has decided to use a keycard, the next order of business is to choose a specific device. That choice will be addressed in a followup article, where I will look at performance, physical design, and other considerations.
This article first appeared in the Linux Weekly News.

23 April 2017

Andreas Metzler: balance sheet snowboarding season 2016/17

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but the snow was soon gone again. There was no snow up to 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down. I had my first day on board on January 6 on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. The measuring station at Diedamskopf (1800m above sea level) topped out slightly above 200cm, on April 19. The last boarding day was yesterday (April 22) in Warth, with hero conditions. I had a pre-opening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between the pre-opening and the start of the season, it did not pay off: by the time I was riding regularly I had forgotten almost everything I had learned at carving school. Nevertheless, it was a strong season thanks to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in super-dense fog). Anyway, here is the balance sheet:
                          2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14 2014/15 2015/16 2016/17
number of (partial) days       25      17      29      37      30      30      25      23      30      24      17      30
Damüls                         10      10       5      10      16      23      10       4      29       9       4       4
Diedamskopf                    15       4      24      23      13       4      14      19       1      13      12      23
Warth/Schröcken                 0       3       0       4       1       3       1       0       0       2       1       3
total meters of altitude   124634   74096  219936  226774  202089  203918  228588  203562  274706  224909  138037  269819
highscore                  10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m  13278m  11015m  12245m
# of runs                     309     189     503     551     462     449     516     468     597     530     354     634

22 February 2017

Antoine Beaupré: The case against password hashers

In previous articles, we have looked at how to generate passwords and did a review of various password managers. There is, however, a third way of managing passwords other than remembering them or encrypting them in a "vault", which is what I call "password hashing". A password hasher generates site-specific passwords from a single master password using a cryptographic hash function. It thus allows a user to have a unique and secure password for every site they use while requiring no storage; they need only remember a single password. You may know these as "deterministic or stateless password managers", but I find the "password manager" phrase to be confusing because a hasher doesn't actually store any passwords. I do not think password hashers represent a good security tradeoff, so I generally do not recommend their use, unless you really do not have reliable storage that you can access readily. In this article, I use the word "password" for a random string used to unlock things, but "token" to represent a generated random string that the user doesn't need to remember. The input to a password hasher is a password with some site-specific context and the output from a password hasher is a token.

What is a password hasher? A password hasher uses the master password and a label (generally the host name) to generate the site-specific password. To change the generated password, the user can modify the label, for example by appending a number. Some password hashers also have different settings to generate tokens of different lengths or compositions (symbols or not, etc.) to accommodate different site-specific password policies. The whole concept of password hashers relies on one-way cryptographic hash functions or key derivation functions that take an arbitrary input string (say, a password) and generate a unique token, from which it is impossible to guess the original input string. Password hashers are generally written as JavaScript bookmarklets or browser plugins and have been around for over a decade. The biggest advantage of password hashers is that you only need to remember a single password. You do not need to carry around a password manager vault: there's no "state" (other than site-specific settings, which can be easily guessed). A password hasher named Master Password makes a compelling case against traditional password managers in its documentation:
It's as though the implicit assumptions are that everybody backs all of their stuff up to at least two different devices and backups in the cloud in at least two separate countries. Well, people don't always have perfect backups. In fact, they usually don't have any.
It goes on to argue that, when you lose your password: "You lose everything. You lose your own identity." The stateless nature of password hashers also means you do not need to use cloud services to synchronize your passwords, as there is (generally, more on that later) no state to carry around. This means, for example, that the list of accounts that you have access to is only stored in your head, and not in some online database that could be hacked without your knowledge. The downside of this is, of course, that attackers do not actually need to have access to your password hasher to start cracking it: they can try to guess your master key without ever stealing anything from you other than a single token you used to log into some random web site. Password hashers also necessarily generate unique passwords for every site you use them on. While you can also do this with password managers, it is not an enforced decision. With hashers, you get distinct and strong passwords for every site with no effort.
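As a rough sketch of the mechanism (not the exact algorithm of any particular tool), a site-specific token can be derived from the master password and a label like this; the single fast hash below is only for illustration, since real hashers use a slow key derivation function, as discussed in the next section:

        # toy illustration only: derive a site token from master password + label;
        # real hashers use a slow KDF (PBKDF2, scrypt, Argon2) and often add a
        # profile-specific secret as an extra salt
        master='correct horse battery staple'   # hypothetical master password
        site='example.com'                      # the label, usually the host name
        printf '%s:%s' "$master" "$site" | sha256sum | cut -c1-20

Changing the label (appending a number, for example) yields a completely unrelated token, which is how a password gets "rotated" in this model.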

The problem with password hashers If hashers are so great, why would you use a password manager? Programs like LessPass and Master Password seem to have strong crypto that is well implemented, so why isn't everyone using those tools? Password hashing, as a general concept, actually has serious problems: since the hashing outputs are constantly compromised (they are sent in password forms to various possibly hostile sites), it's theoretically possible to derive the master password and then break all the generated tokens in one shot. The use of stronger key derivation functions (like PBKDF2, scrypt, or HMAC) or seeds (like a profile-specific secret) makes those attacks much harder, especially if the seed is long enough to make brute-force attacks infeasible. (Unfortunately, in the case of Password Hasher Plus, the seed is derived from Math.random() calls, which are not considered cryptographically secure.) Basically, as stated by Julian Morrison in this discussion:
A password is now ciphertext, not a block of line noise. Every time you transmit it, you are giving away potential clues of use to an attacker. [...] You only have one password for all the sites, really, underneath, and it's your secret key. If it's broken, it's now a skeleton-key [...]
Newer implementations like LessPass and Master Password fix this by using reasonable key derivation algorithms (PBKDF2 and scrypt, respectively) that are more resistant to offline cracking attacks, but who knows how long those will hold? To give a concrete example, if you would like to use the new winner of the password hashing competition (Argon2) in your password manager, you can patch the program (or wait for an update) and re-encrypt your database. With a password hasher, it's not so easy: changing the algorithm means logging in to every site you visited and changing the password. As someone who used a password hasher for a few years, I can tell you this is really impractical: you quickly end up with hundreds of passwords. The LessPass developers tried to facilitate this, but they ended up mostly giving up. Which brings us to the question of state. A lot of those tools claim to work "without a server" or as being "stateless" and while those claims are partly true, hashers are way more usable (and more secure, with profile secrets) when they do keep some sort of state. For example, Password Hasher Plus records, in your browser profile, which site you visited and which settings were used on each site, which makes it easier to comply with weird password policies. But then that state needs to be backed up and synchronized across multiple devices, which led LessPass to offer a service (which you can also self-host) to keep those settings online. At this point, a key benefit of the password hasher approach (not keeping state) just disappears and you might as well use a password manager. Another issue with password hashers is choosing the right one from the start, because changing software generally means changing the algorithm, and therefore changing passwords everywhere. If there was a well-established program that was recognized as a solid cryptographic solution by the community, I would feel more confident. But what I have seen is that there are a lot of different implementations, each with its own warts and flaws; because changing is so painful, I can't actually use any of those alternatives. All of the password hashers I have reviewed have severe security versus usability tradeoffs. For example, LessPass has what seems to be a sound cryptographic implementation, but using it requires you to click on the icon, fill in the fields, click generate, and then copy the password into the field, which means at least four or five actions per password. The venerable Password Hasher is much easier to use, but it makes you type the master password directly into the site's password form, so hostile sites can simply use JavaScript to sniff the master password while it is typed. While there are workarounds implemented in Password Hasher Plus (the profile-specific secret), both tools are more or less abandoned now. The Password Hasher homepage, linked from the extension page, is now a 404. Password Hasher Plus hasn't seen a release in over a year and there is no space for collaborating on the software; the homepage is simply the author's Google+ page with no information on the project. I couldn't actually find the source online and had to download the Chrome extension by hand to review the source code. Software abandonment is a serious issue for every project out there, but I would argue that it is especially severe for password hashers. Furthermore, I have had difficulty using password hashers in unified login environments like Wikipedia's or StackExchange's single-sign-on systems.
Because they allow you to log in with the same password on multiple sites, you need to choose (and remember) what label you used when signing in. Did I sign in on stackoverflow.com? Or was it stackexchange.com? Also, as mentioned in the previous article about password managers, web-based password managers have serious security flaws. Since more than a few password hashers are implemented using bookmarklets, they bring all of those vulnerabilities with them, which can range from account name to master password disclosures. Finally, some of the password hashers use dubious crypto primitives that were valid and interesting a decade ago, but are really showing their age now. Stanford's pwdhash uses MD5, which is considered "cryptographically broken and unsuitable for further use". We have seen partial key recovery attacks against MD5 already and, while those do not allow an attacker to recover the full master password yet (especially not with HMAC-MD5), I would not recommend anyone use MD5 in anything at this point, especially if changing that algorithm later is hard. Some hashers (like Password Hasher and Password Hasher Plus) use a single round of SHA-1 to derive a token from a password; by comparison, WPA2 (standardized in 2004) uses 4096 iterations of HMAC-SHA1. A recent US National Institute of Standards and Technology (NIST) report also recommends "at least 10,000 iterations of the hash function".
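To make the iteration argument concrete, here is a hedged sketch (in bash) of the difference between a single fast hash and naive key stretching; the loop below is only a stand-in for what PBKDF2, scrypt, or Argon2 do properly and far more efficiently:

        master='correct horse battery staple'   # hypothetical master password
        site='example.com'
        # single fast hash: cheap for the user, but just as cheap for an attacker
        # brute-forcing master passwords offline from one leaked token
        printf '%s:%s' "$master" "$site" | sha1sum | cut -c1-20

        # naive key stretching: chaining the hash 10,000 times makes every guess
        # roughly 10,000 times more expensive for the attacker as well
        h=$(printf '%s:%s' "$master" "$site" | sha256sum | cut -d' ' -f1)
        for i in $(seq 1 10000); do
            h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
        done
        printf '%s\n' "${h:0:20}"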

Conclusion Forced to suggest a password hasher, I would probably point to LessPass or Master Password, depending on the platform of the person asking. But, for now, I have determined that the security drawbacks of password hashers are not acceptable and I do not recommend them. It makes my password management recommendation shorter anyway: "remember a few carefully generated passwords and shove everything else in a password manager". [Many thanks to Daniel Kahn Gillmor for the thorough reviews provided for the password articles.]
Note: this article first appeared in the Linux Weekly News. Also, details of my research into password hashers are available in the password hashers history article.
